
Search in the Catalogues and Directories

Hits 1–20 of 1,423

1. Characterizing News Portrayal of Civil Unrest in Hong Kong, 1998–2020 ... (BASE)
2. Jibes & Delights: A Dataset of Targeted Insults and Compliments to Tackle Online Abuse ... (BASE)
3. Bird’s Eye: Probing for Linguistic Graph Structures with a Simple Information-Theoretic Approach ... (BASE)
4. Dependency Patterns of Complex Sentences and Semantic Disambiguation for Abstract Meaning Representation Parsing ... (BASE)
5. Phrase-Level Action Reinforcement Learning for Neural Dialog Response Generation ... (BASE)
6. 10D: Phonology, Morphology and Word Segmentation #1 ... (BASE)
7. Sample-efficient Linguistic Generalizations through Program Synthesis: Experiments with Phonology Problems ... (BASE)
8. PHMOSpell: Phonological and Morphological Knowledge Guided Chinese Spelling Check ... (BASE)
9. 19th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology - Part 2 ... (BASE)
10. 18th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology - Part 1 ... (BASE)
11. The Match-Extend Serialization Algorithm in Multiprecedence ... (BASE)
12. Recognizing Reduplicated Forms: Finite-State Buffered Machines ... (BASE)
13. Correcting Chinese Spelling Errors with Phonetic Pre-training ... (BASE)
14. PLOME: Pre-training with Misspelled Knowledge for Chinese Spelling Correction ... (BASE)
Abstract: Chinese spelling correction (CSC) is the task of detecting and correcting spelling errors in text. CSC is essentially a linguistic problem, so the ability to understand language is crucial to it. In this paper, we propose a Pre-trained masked Language model with Misspelled knowledgE (PLOME) for CSC, which jointly learns to understand language and to correct spelling errors. To this end, PLOME masks the chosen tokens with similar characters drawn from a confusion set, rather than with the fixed token "[MASK]" as in BERT. Besides character prediction, PLOME also introduces pronunciation prediction to learn misspelling knowledge at the phonetic level. Moreover, phonological and visual similarity knowledge is important to this task; PLOME utilizes GRU networks to model such knowledge based on characters' phonetics and strokes. Experiments are conducted on widely used benchmarks. Our method achieves superior performance against state-of-the-art ...
Read paper: https://www.aclanthology.org/2021.acl-long.233
Keywords: Computational Linguistics; Condensed Matter Physics; Deep Learning; Electromagnetism; FOS Physical sciences; Information and Knowledge Engineering; Neural Network; Semantics
URL: https://underline.io/lecture/25563-plome-pre-training-with-misspelled-knowledge-for-chinese-spelling-correction
DOI: https://dx.doi.org/10.48448/hvyh-zh15
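The confusion-set masking the PLOME abstract describes (replace a token with a confusable character instead of "[MASK]") can be sketched in a few lines. This is an illustrative sketch only, not PLOME's actual code: the `CONFUSION_SET` entries and the `confusion_mask` helper are hypothetical, and a real CSC confusion set would be derived from pinyin and stroke similarity over the full character inventory.

```python
import random

# Hypothetical confusion set: maps a character to visually or
# phonetically similar characters. Entries here are illustrative only.
CONFUSION_SET = {
    "的": ["地", "得"],
    "在": ["再"],
    "做": ["作"],
}

def confusion_mask(tokens, mask_prob=0.15, rng=random):
    """Corrupt some tokens with confusable characters instead of [MASK].

    Returns the corrupted sequence and, for each position, the original
    character the model must recover (None where nothing was changed).
    """
    masked, labels = [], []
    for tok in tokens:
        if tok in CONFUSION_SET and rng.random() < mask_prob:
            masked.append(rng.choice(CONFUSION_SET[tok]))
            labels.append(tok)      # prediction target: the original token
        else:
            masked.append(tok)
            labels.append(None)     # not a prediction target
    return masked, labels
```

During pre-training, the labelled positions give the character-prediction objective; the abstract's pronunciation-prediction objective would add a second label per masked position (the original character's pronunciation), which this sketch omits.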
15. Including Signed Languages in Natural Language Processing ... (BASE)
16. When is Char Better Than Subword: A Systematic Study of Segmentation Algorithms for Neural Machine Translation ... (BASE)
17. The Reading Machine: a Versatile Framework for Studying Incremental Parsing Strategies ... (BASE)
18. To POS Tag or Not to POS Tag: The Impact of POS Tags on Morphological Learning in Low-Resource Settings ... (BASE)
19. An FST morphological analyzer for the Gitksan language ... (BASE)
20. Superbizarre Is Not Superb: Derivational Morphology Improves BERT's Interpretation of Complex Words ... (BASE)


Open access documents: 1,423
© 2013–2024 Lin|gu|is|tik